Enhancing the resilience of distributed networks in the face of malicious agents is an important problem for which many key theoretical results and applications require further development and characterization. This work focuses on the problem of distributed optimization in multi-agent cyberphysical systems, where a legitimate agent's dynamics are influenced both by the values it receives from potentially malicious neighboring agents and by its own self-serving target function. We develop a new algorithmic and analytical framework that achieves resilience for the class of problems where stochastic values of trust between agents exist and can be exploited. In this setting, we show that convergence to the true global optimal point can be recovered, both in mean and almost surely, even in the presence of malicious agents. Furthermore, we provide expected convergence rate guarantees in the form of upper bounds on the expected squared distance to the optimal value. Finally, we present numerical results that validate the analytical convergence guarantees, even when the malicious agents constitute the majority of agents in the network.
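As a rough illustration of this setting, the sketch below shows one possible trust-gated consensus-plus-gradient update an agent could run. The function names, the 0.5 trust threshold, and the plain averaging rule are assumptions made for illustration and are not the paper's algorithm.

```python
import numpy as np

def trusted_mask(trust_history, threshold=0.5):
    """Keep a neighbor when its time-averaged trust observation exceeds a threshold
    (an illustrative rule; the paper's classification of neighbors may differ)."""
    return trust_history.mean(axis=1) > threshold

def resilient_step(x_i, neighbor_values, trust_history, grad_f_i, step_size=0.01):
    """One consensus-plus-gradient update that averages the agent's own value
    together with the values of neighbors currently deemed trustworthy."""
    mask = trusted_mask(trust_history)                         # boolean, one entry per neighbor
    consensus = np.vstack([neighbor_values[mask], [x_i]]).mean(axis=0)
    return consensus - step_size * grad_f_i(x_i)               # descend the local objective
```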
We develop a resilient binary hypothesis testing framework for decision making in adversarial multi-robot crowdsensing tasks. This framework exploits stochastic trust observations between robots to arrive at tractable, resilient decision making at a centralized Fusion Center (FC), even when i) there exist malicious robots in the network and their number may be larger than the number of legitimate robots, and ii) the FC uses one-shot noisy measurements from all robots. We derive two algorithms to achieve this. The first is the Two-Stage Approach (2SA), which estimates the legitimacy of robots based on the received trust observations and provably minimizes the probability of detection error under the worst-case malicious attack; here, the proportion of malicious robots is known but arbitrary. For the case of an unknown proportion of malicious robots, we develop the Adversarial Generalized Likelihood Ratio Test (A-GLRT), which uses both the reported robot measurements and the trust observations to simultaneously estimate the trustworthiness of the robots, their reporting strategy, and the correct hypothesis. We exploit special problem structure to show that this approach remains computationally tractable despite several unknown problem parameters. We deploy both algorithms in a hardware experiment in which a group of robots crowdsenses traffic conditions on a mock-up road network while subject to a Sybil attack. We extract the trust observation for each robot from actual communication signals, which provide statistical information on the uniqueness of the sender. We show that even when the malicious robots are in the majority, the FC can reduce the probability of detection error to 30.5% and 29% for the 2SA and the A-GLRT, respectively.
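A minimal sketch of the two-stage idea, under assumed Gaussian one-shot measurements: robots are first screened by their average trust observation, and a likelihood ratio test is then run on the surviving measurements. The threshold `tau`, the Gaussian noise model, and the function name are illustrative assumptions rather than the exact 2SA.

```python
import numpy as np

def two_stage_decision(measurements, trust_obs, mu0=0.0, mu1=1.0, sigma=1.0, tau=0.5):
    """Stage 1: keep robots whose average trust observation exceeds tau.
    Stage 2: Gaussian log-likelihood ratio test on the kept one-shot measurements.
    The Gaussian model and the fixed threshold tau are illustrative assumptions."""
    legit = trust_obs.mean(axis=1) > tau          # legitimacy estimate per robot
    y = measurements[legit]
    # log p(y | H1) - log p(y | H0) for y ~ N(mu_h, sigma^2) under hypothesis h
    llr = np.sum((y - mu0) ** 2 - (y - mu1) ** 2) / (2 * sigma ** 2)
    return int(llr > 0)                           # 1 -> decide H1, 0 -> decide H0
```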
High-dimensional models often have a large memory footprint and must be quantized after training before being deployed on resource-constrained edge devices for inference tasks. In this work, we develop an information-theoretic framework for the problem of quantizing a linear regressor learned from training data $(\mathbf{X}, \mathbf{y})$, for some underlying statistical relationship $\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{v}$. The learned model is an estimate of the latent parameter $\boldsymbol{\theta} \in \mathbb{R}^d$ and is constrained to be representable using only $Bd$ bits, where $B \in (0, \infty)$ is a pre-specified budget and $d$ is the dimension. Under this setting, we derive an information-theoretic lower bound on the minimax risk and propose a matching upper bound based on an embedding-based algorithm that is tight up to constant factors. Together, the lower and upper bounds characterize the minimum threshold bit budget required to achieve a risk comparable to that of the unquantized setting. We also propose a randomized Hadamard embedding that is computationally efficient and optimal up to a mild logarithmic factor of the lower bound. Our model quantization strategy generalizes, and we demonstrate its efficacy by extending the method and the upper bounds to two-layer ReLU neural networks for nonlinear regression. Numerical simulations show the improved performance of the proposed scheme as well as its closeness to the lower bound.
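For context, a naive baseline is to fit the regressor and then uniformly quantize each coordinate with an integer number of bits. The paper's embedding-based scheme and its fractional budgets $B \in (0, \infty)$ are more refined than this sketch, whose function names and rounding rule are assumptions.

```python
import numpy as np

def quantize_regressor(X, y, bits=4):
    """Fit least squares, then uniformly quantize each coordinate of theta_hat
    to `bits` bits over its observed range (a naive baseline, not the paper's scheme)."""
    theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    lo, hi = theta_hat.min(), theta_hat.max()
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1) if hi > lo else 1.0
    codes = np.round((theta_hat - lo) / step).astype(int)     # integers in [0, levels - 1]
    return codes, lo, step                                    # d * bits bits plus two floats

def dequantize(codes, lo, step):
    """Reconstruct the quantized regressor for inference on the edge device."""
    return lo + codes * step
```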
We consider a centralized detection problem in which sensors take noisy measurements and have intermittent connectivity to a centralized fusion center. Sensors can collaborate locally within predefined sensor clusters and fuse their noisy sensor data to reach a common estimate of the detected event in each cluster. The connectivity of each sensor cluster is intermittent and depends on the communication opportunities available from the sensors to the fusion center. Upon receiving the estimates from all connected sensor clusters, the fusion center fuses the received estimates to make a final determination regarding the occurrence of the event over the deployment area. We refer to this hybrid communication scheme as a cloud-cluster architecture. We propose a method for optimizing the decision rule of each cluster and analyze the expected detection performance resulting from the hybrid scheme. Our method is tractable and addresses the high computational complexity caused by heterogeneous sensor and cluster detection quality, heterogeneity in their communication opportunities, and the non-convexity of the loss function. Our analysis shows that clustering the sensors provides resilience to noise in the case of low sensor-to-cloud communication probability. For larger clusters, a sharp improvement in detection performance is possible even at low communication probability with our cloud-cluster architecture.
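The sketch below gives one simple reading of the pipeline: each cluster fuses its members' readings into a common estimate, and the fusion center combines whatever estimates arrive over the intermittent links. The plain averaging and the 0.5 threshold are illustrative assumptions, not the optimized decision rules derived in the paper.

```python
import numpy as np

def cluster_estimate(readings):
    """Each cluster fuses its members' noisy readings into one common estimate."""
    return float(np.mean(readings))

def fusion_center_decision(cluster_estimates, connected, threshold=0.5):
    """The FC fuses estimates only from clusters whose intermittent link was up.
    The plain average and fixed threshold are illustrative, not the optimized rule."""
    est = np.asarray(cluster_estimates)[np.asarray(connected)]
    score = est.mean() if est.size else 0.0       # no cluster reported -> default to H0
    return int(score > threshold)                 # 1 -> event declared present
```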
When humans collaborate with each other, they often make decisions by observing others and considering the consequences that their actions may have on the entire team, instead of greedily doing what is best for themselves. We would like our AI agents to collaborate effectively in a similar manner by capturing a model of their partners. In this work, we propose and analyze a decentralized multi-armed bandit (MAB) problem with coupled rewards as an abstraction of more general multi-agent collaboration. We demonstrate that naive extensions of single-agent optimal MAB algorithms fail when applied to decentralized bandit teams. Instead, we propose a partner-aware strategy for joint sequential decision making that extends the well-known single-agent Upper Confidence Bound algorithm. We analytically show that our proposed strategy achieves logarithmic regret, and we provide extensive experiments involving human-AI and human-robot collaboration to validate our theoretical findings. Our results show that the proposed partner-aware strategy outperforms other known methods, and our human subject study indicates that humans prefer to collaborate with AI agents that implement our partner-aware strategy.
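For reference, the single-agent Upper Confidence Bound (UCB1) baseline that the partner-aware strategy builds on can be sketched as follows; the partner-aware extension itself is not reproduced here, and the exploration constant `c` is an assumption.

```python
import numpy as np

def ucb1(pull, n_arms, horizon, c=2.0):
    """Standard single-agent UCB1. `pull(arm)` returns a stochastic reward in [0, 1]."""
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                                    # play each arm once to initialize
        else:
            bonus = np.sqrt(c * np.log(t) / counts)        # optimism bonus per arm
            arm = int(np.argmax(means + bonus))
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]       # running mean update
    return means, counts
```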
We study first-order optimization algorithms under the constraint that the descent direction is quantized using a budget of $r$ bits per dimension, where $r \in (0, \infty)$. We propose computationally efficient optimization algorithms with convergence rates matching the information-theoretic performance bounds: (i) with access to an exact gradient oracle, for smooth and strongly convex objectives, and (ii) with access to a noisy subgradient oracle, for general convex and non-smooth objectives. The crux of these algorithms is a polynomial-complexity source coding scheme that embeds a vector into a random subspace before quantizing it. These embeddings are such that, with high probability, their projection along any of the canonical directions of the transform space is small. As a consequence, quantizing these embeddings and then applying an inverse transform back to the original space yields a source coding method with optimal covering efficiency while utilizing just $r$ bits per dimension. Our algorithms guarantee optimality for arbitrary values of the bit budget $r$, including both the sub-linear budget regime ($r < 1$) and the high-budget regime ($r \geq 1$), while requiring $O\left(n^2\right)$ multiplications, where $n$ is the dimension. We also propose an efficient relaxation of this coding scheme using Hadamard subspaces that significantly improves the performance of gradient sparsification schemes. Numerical simulations validate our theoretical claims. Our implementation is available at https://github.com/rajarshisaha95/distoptconstrocncomm.
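A simplified sketch of the encode/decode idea, assuming an integer bit budget $r \geq 1$ and a dense random rotation in place of the paper's exact embedding: rotate the vector into a random subspace, scalar-quantize each coordinate, and rotate back.

```python
import numpy as np

def random_rotation(n, rng):
    """Random orthogonal matrix from the QR factorization of a Gaussian matrix:
    one way to obtain an embedding whose projections on canonical axes are small."""
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

def encode_decode(g, bits, rng):
    """Rotate the descent direction, uniformly quantize each coordinate, rotate back.
    The scale would also have to be transmitted; fractional budgets (r < 1) and the
    paper's exact scheme are not handled by this sketch."""
    Q = random_rotation(len(g), rng)
    z = Q @ g
    scale = np.max(np.abs(z)) + 1e-12
    levels = 2 ** int(bits)
    codes = np.round((z / scale + 1.0) / 2.0 * (levels - 1))   # integers in [0, levels - 1]
    z_hat = (codes / (levels - 1) * 2.0 - 1.0) * scale
    return Q.T @ z_hat                                          # decoded descent direction
```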
The design of methods for inference from time sequences has traditionally relied on statistical models that describe the relation between a latent desired sequence and the observed one. A broad family of model-based algorithms has been derived to carry out inference at controllable complexity using recursive computations over the factor graph representing the underlying distribution. An alternative, model-agnostic approach utilizes machine learning (ML) methods. Here we propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences. In the proposed approach, neural networks are developed to separately learn specific components of the factor graph describing the distribution of the time sequence, rather than the complete inference task. By exploiting the stationary nature of this distribution, the resulting approach can be applied to sequences of different temporal duration. Learned factor graphs can be realized using compact neural networks that can be trained with small training sets, or they can alternatively be used to improve upon existing deep inference systems. We present an inference algorithm based on learned stationary factor graphs that learns to implement the sum-product scheme from labeled data and can be applied to sequences of different lengths. Our experimental results demonstrate the ability of the proposed learned factor graphs to carry out accurate inference from small training sets for sleep stage detection on a sleep-monitoring dataset, as well as for symbol detection in digital communications with unknown channels.
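To make the message-passing component concrete, the sketch below runs the sum-product (forward-backward) recursion on a chain-structured factor graph. In a learned factor graph the `transition` and `emission` factors would be produced by small neural networks; here they are passed in as a plain array and a function, which is an assumption for illustration.

```python
import numpy as np

def forward_backward(obs, prior, transition, emission):
    """Sum-product message passing on a chain-structured factor graph.
    transition[i, j] = p(s_t = j | s_{t-1} = i); emission(y) returns a length-K
    vector proportional to p(y | s). In a learned factor graph, these factors
    would be the outputs of small neural networks."""
    K, T = len(prior), len(obs)
    fwd = np.zeros((T, K))
    bwd = np.ones((T, K))
    fwd[0] = prior * emission(obs[0])
    fwd[0] /= fwd[0].sum()
    for t in range(1, T):                                    # forward messages
        fwd[t] = (fwd[t - 1] @ transition) * emission(obs[t])
        fwd[t] /= fwd[t].sum()
    for t in range(T - 2, -1, -1):                           # backward messages
        bwd[t] = transition @ (emission(obs[t + 1]) * bwd[t + 1])
        bwd[t] /= bwd[t].sum()
    post = fwd * bwd
    return post / post.sum(axis=1, keepdims=True)            # per-step state marginals
```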
Computational units in artificial neural networks follow a simplified model of biological neurons. In the biological model, the output signal of a neuron runs down the axon, splits following the many branches at its end, and passes identically to all the downward neurons of the network. Each downward neuron uses its copy of this signal as one of many dendritic inputs, integrates them all, and fires an output if the result exceeds some threshold. In the artificial neural network, this translates to the fact that the nonlinear filtering of the signal is performed in the upward neuron, meaning that in practice the same activation is shared by all the downward neurons that use that signal as their input. Dendrites thus play a passive role. We propose a slightly more complex model for the biological neuron, where dendrites play an active role: the activation at the output of the upward neuron becomes optional, and instead the signals going through each dendrite undergo independent nonlinear filtering before the linear combination. We implement this new model as a ReLU computational unit and discuss its biological plausibility. We compare this new computational unit with the standard one and describe it from a geometrical point of view. We provide a Keras implementation of this unit in fully connected and convolutional layers and estimate the change in their FLOPs and weights. We then use these layers in ResNet architectures on CIFAR-10, CIFAR-100, Imagenette, and Imagewoof, obtaining performance improvements of up to 1.73% over standard ResNets. Finally, we prove a universal representation theorem for continuous functions on compact sets and show that this new unit has more representational power than its standard counterpart.
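One possible reading of the proposed unit, written as a Keras layer in which every input-output edge applies its own ReLU before the weighted sum. This is an illustrative sketch of the idea, not the authors' released implementation, and the class name `DendriticDense` is made up.

```python
import tensorflow as tf

class DendriticDense(tf.keras.layers.Layer):
    """Dense layer in which each input-output edge ("dendrite") applies its own
    ReLU before the linear combination: y_i = sum_j relu(w_ij * x_j) + b_i.
    Illustrative sketch only; not the authors' released Keras code."""
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        d = int(input_shape[-1])
        self.w = self.add_weight(shape=(d, self.units), initializer="glorot_uniform")
        self.b = self.add_weight(shape=(self.units,), initializer="zeros")

    def call(self, x):
        per_edge = tf.nn.relu(x[..., :, None] * self.w)      # shape (..., d, units)
        return tf.reduce_sum(per_edge, axis=-2) + self.b     # sum over the input axis
```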
Humans have internal models of robots (like their physical capabilities), the world (like what will happen next), and their tasks (like a preferred goal). However, human internal models are not always perfect: for example, it is easy to underestimate a robot's inertia. Nevertheless, these models change and improve over time as humans gather more experience. Interestingly, robot actions influence what this experience is, and therefore influence how people's internal models change. In this work we take a step towards enabling robots to understand the influence they have, leverage it to better assist people, and help human models more quickly align with reality. Our key idea is to model the human's learning as a nonlinear dynamical system which evolves the human's internal model given new observations. We formulate a novel optimization problem to infer the human's learning dynamics from demonstrations that naturally exhibit human learning. We then formalize how robots can influence human learning by embedding the human's learning dynamics model into the robot planning problem. Although our formulations provide concrete problem statements, they are intractable to solve in full generality. We contribute an approximation that sacrifices the complexity of the human internal models we can represent, but enables robots to learn the nonlinear dynamics of these internal models. We evaluate our inference and planning methods in a suite of simulated environments and an in-person user study, where a 7DOF robotic arm teaches participants to be better teleoperators. While influencing human learning remains an open problem, our results demonstrate that this influence is possible and can be helpful in real human-robot interaction.
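As a toy instance of the abstraction, the sketch below evolves a scalar internal model toward each new observation with a fixed learning rate. The true learning dynamics are unknown and are inferred from demonstrations in the paper, so the linear update and the rate `alpha` here are purely illustrative.

```python
import numpy as np

def human_learning_step(theta, observation, alpha=0.2):
    """Toy learning dynamics: the human's internal model `theta` (e.g., a believed
    robot inertia) moves toward what the latest observation implies. The linear form
    and fixed alpha are assumptions, not the nonlinear dynamics learned in the paper."""
    return theta + alpha * (observation - theta)

def rollout(theta0, observations, alpha=0.2):
    """Evolve the internal model over a sequence of observations."""
    thetas = [theta0]
    for obs in observations:
        thetas.append(human_learning_step(thetas[-1], obs, alpha))
    return np.array(thetas)
```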
Explainability is a vibrant research topic in the artificial intelligence community, with growing interest across methods and domains. Much has been written about the topic, yet explainability still lacks shared terminology and a framework capable of providing structural soundness to explanations. In our work, we address these issues by proposing a novel definition of explanation that is a synthesis of what can be found in the literature. We recognize that explanations are not atomic but the product of evidence stemming from the model and its input-output behavior, and of the human interpretation of this evidence. Furthermore, we fit explanations into the properties of faithfulness (i.e., the explanation being a true description of the model's decision-making) and plausibility (i.e., how convincing the explanation looks to the user). Using our proposed theoretical framework simplifies how these properties are operationalized and provides new insight into common explanation methods that we analyze as case studies.